Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation
Authors
Abstract
Imitation learning is an effective approach for autonomous systems to acquire control policies when an explicit reward function is unavailable, using supervision provided as demonstrations from an expert, typically a human operator. However, standard imitation learning methods assume that the agent receives examples of observation-action tuples that could be provided, for instance, to a supervised learning algorithm. This stands in contrast to how humans and animals imitate: we observe another person performing some behavior and then figure out which actions will realize that behavior, compensating for changes in viewpoint, surroundings, and embodiment. We term this kind of imitation learning imitation-from-observation and propose an imitation learning method based on video prediction with context translation and deep reinforcement learning. This lifts the assumption in imitation learning that the demonstration must consist of observations and actions in the same environment, and enables a variety of interesting applications, including learning robotic skills that involve tool use simply by observing videos of human tool use. Our experimental results show that our approach can perform imitation-from-observation for a variety of real-world robotic tasks modeled on common household chores, acquiring skills such as sweeping from videos of a human demonstrator. Videos can be found at https://sites.google.com/site/imitationfromobservation/
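The core idea in the abstract — translate the demonstrator's observations into the learner's context, then reward the learner for tracking the translated frames — can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the real method learns a context translation model and feature encoder from video, whereas here `DummyTranslator` and the squared-distance reward are hypothetical stand-ins.

```python
import numpy as np

class DummyTranslator:
    """Stand-in for a learned context translation model that maps a
    demonstration frame into the learner's context (viewpoint,
    surroundings). Here it is just an additive offset, purely for
    illustration; the paper learns this mapping from paired videos."""
    def __init__(self, offset):
        self.offset = offset

    def translate(self, demo_frame):
        # Map the demonstrator's observation into the learner's context.
        return demo_frame + self.offset

def tracking_reward(translated_demo_feats, agent_feats):
    """Hedged sketch of a tracking reward: penalize the squared distance
    between features of the translated demonstration frame and the
    agent's current observation. A deep RL algorithm would then maximize
    the sum of these rewards over the trajectory."""
    return -float(np.sum((translated_demo_feats - agent_feats) ** 2))
```

For example, if the translated demonstration frame and the agent's observation coincide, the reward is 0 (its maximum); any mismatch yields a negative reward, so a policy optimizer is driven to reproduce the demonstrated behavior in the learner's own context.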
Similar resources
Dolphin Imitation: Who, What, When, and Why?
The imitative ability of nonhuman animals has intrigued a number of scholars and, in doing so, has generated a considerable amount of controversy. Although it is clear that many species can learn via observational learning, there is a lack of consensus concerning both what sorts of things can be learned by watching others and what types of observational learning should count as imitation. These...
One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning
Demonstrations provide a descriptive medium for specifying robotic tasks. Prior work has shown that robots can acquire a range of complex skills through demonstration, such as table tennis (Mülling et al., 2013), lane following (Pomerleau, 1989), pouring water (Pastor et al., 2009), drawer opening (Rana et al., 2017), and multi-stage manipulation tasks (Zhang et al., 2018). However, the most ef...
Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets
Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning fram...
Time-Contrastive Networks: Self-Supervised Learning from Video
We propose a self-supervised approach for learning representations and robotic behaviors entirely from unlabeled videos recorded from multiple viewpoints, and study how this representation can be used in two robotic imitation settings: imitating object interactions from videos of humans, and imitating human poses. Imitation of human behavior requires a viewpoint-invariant representation that ca...
Inferring The Latent Structure of Human Decision-Making from Raw Visual Inputs
The goal of imitation learning is to mimic expert behavior without access to an explicit reward signal. Expert demonstrations provided by humans, however, often show significant variability due to latent factors that are typically not explicitly modeled. In this paper, we propose a new algorithm that can infer the latent structure of expert demonstrations in an unsupervised way. Our method, bui...
Journal: CoRR
Volume: abs/1707.03374
Pages: -
Publication date: 2017